95 research outputs found

    Compiling for an Heterogeneous Vector Image Processor

    We present a new compilation strategy, implemented at a small cost, to optimize image applications developed on top of a high-level image processing library for a heterogeneous processor with a vector image processing accelerator. The library provides the semantics of the image computations. The pipelined structure of the accelerator makes it possible to compute whole expressions with dozens of elementary image instructions, but it is constrained in that intermediate image values cannot be extracted. We adapted standard compilation techniques to perform this task automatically. Our strategy is implemented in PIPS, a source-to-source compiler, which greatly reduces the development cost as standard phases are reused and parameterized for the target. Experiments were run on the hardware functional simulator. We compiled 1217 cases, from elementary tests to full applications. All are optimal except a few, which are mostly within a single accelerator call of optimality. Our contributions include: 1) a general low-cost compilation strategy for image processing applications, based on the semantics provided by library calls, which improves locality by an order of magnitude; 2) a specific heuristic to minimize execution time on the target vector accelerator; 3) numerous experiments that show the effectiveness of our strategy.
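    A minimal sketch of the grouping problem described above (not the PIPS implementation; the Op type, the pack_calls helper and the operation names are invented for illustration): a sequence of elementary image instructions is split into accelerator calls only where the host actually needs an intermediate image, since the pipeline cannot extract intermediate values.

```python
# Hypothetical illustration: pack a chain of elementary image operations
# into as few accelerator calls as possible, given that intermediate
# images cannot be read back from the pipelined accelerator.

from dataclasses import dataclass

@dataclass
class Op:
    name: str      # elementary instruction, e.g. "erode", "dilate", "sub"
    inputs: list   # names of input images
    output: str    # name of the produced image

def pack_calls(ops, live_on_host):
    """Greedily split the instruction sequence into accelerator calls.

    A call is cut whenever the host needs the intermediate image just
    produced, because the pipeline cannot extract intermediate values."""
    calls, current = [], []
    for op in ops:
        current.append(op)
        if op.output in live_on_host:
            calls.append(current)
            current = []
    if current:
        calls.append(current)
    return calls

# A morphological gradient written with elementary operations:
program = [
    Op("dilate", ["img"], "t1"),
    Op("erode",  ["img"], "t2"),
    Op("sub",    ["t1", "t2"], "grad"),
]
# Only "grad" is used by the host, so the whole expression fits in one call.
print([[op.name for op in call] for call in pack_calls(program, {"grad"})])
# -> [['dilate', 'erode', 'sub']]
```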

    Compilation pour cibles hétérogènes : automatisation des analyses, transformations et décisions nécessaires

    Hardware accelerators, such as FPGA boards or graphics cards, offer an interesting alternative or complement to classical multi-core processors for many scientific applications. However, porting existing applications to them is costly and difficult, and standard compilers, traditionally focused on code generation for sequential processors, lack the abstractions needed to generate code automatically and retargetably for these new targets. This article presents a set of high-level code transformations, based on a multi-level abstraction of the architecture of current accelerators, that make it possible to build target-specific compilers on top of a common infrastructure. These transformations were used to build, with PIPS, two fully automated compilers: one for an FPGA-based embedded processor and one for NVIDIA GPUs with PAR4ALL.

    Computing Invariants with Transformers: Experimental Scalability and Accuracy

    Using abstract interpretation, invariants are usually obtained by iteratively solving a system of equations linking preconditions according to program statements. However, it is also possible to abstract the statements first as transformers, and then propagate the preconditions using the transformers. The second approach is modular because procedures and loops can be abstracted once and for all, avoiding an iterative resolution over the call graph and all the control flow graphs. However, the transformer approach based on polyhedral abstract domains incurs two penalties: some invariant accuracy may be lost when computing transformers, and the execution time may increase exponentially because the dimension of a transformer is twice the dimension of a precondition. The purposes of this article are 1) to measure the benefits of the modular approach and its drawbacks in terms of execution time and accuracy, using significant examples and a newly developed benchmark for loop invariant analysis, ALICe; 2) to present a new technique designed to reduce the accuracy loss when computing transformers; 3) to evaluate experimentally the accuracy gains that this new technique and other previously discussed ones provide on the ALICe test cases; and 4) to compare the execution times and accuracy of different tools: ASPIC, ISL, PAGAI and PIPS. Our results suggest that the transformer-based approach used in PIPS, once improved with transformer lists, is as accurate as the other tools on the ALICe benchmark. Its modularity nevertheless leads to shorter execution times on the nested loops and procedure calls found in real applications.
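    A toy worked example (not taken from the paper) of the difference between a transformer and a precondition in the polyhedral setting described above, with i, i' and n as illustrative symbols:

```latex
% Toy example: the statement  i := i + 1  abstracted as a transformer,
% then applied to a precondition.
\[
  T_{i := i+1} = \{\, (i, i') \mid i' = i + 1 \,\}, \qquad
  P = \{\, i \mid 0 \le i \le n \,\}.
\]
\[
  T_{i := i+1}(P) = \{\, i' \mid \exists i.\ 0 \le i \le n \wedge i' = i + 1 \,\}
                  = \{\, i' \mid 1 \le i' \le n + 1 \,\}.
\]
% Transformers compose, which is what makes the approach modular:
\[
  T_{S_1;\,S_2} = \{\, (x, x'') \mid \exists x'.\ (x, x') \in T_{S_1}
                  \wedge (x', x'') \in T_{S_2} \,\}.
\]
% A transformer relates pre- and post-states, hence twice as many
% dimensions as the precondition it is applied to.
```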

    ALICe: A Framework to Improve Affine Loop Invariant Computation

    A crucial point in program analysis is the computation of loop invariants. Accurate invariants are required to prove properties of a program, but they are difficult to compute. Extensive research has been carried out but, to the best of our knowledge, no benchmark has ever been developed to compare algorithms and tools. We present ALICe, a toolset to compare techniques for the automatic computation of affine scalar loop invariants. It comes with a benchmark that we built from 102 test cases found in the loop invariant literature, and it interfaces with three analysis tools that rely on different techniques: Aspic, ISL and PIPS. Conversion tools are provided to handle the format heterogeneity of these programs. Experimental results show the importance of model coding and the poor performance of PIPS on concurrent loops. To tackle these issues, we use two model restructuring techniques, whose correctness is proved in Coq, and discuss the improvements achieved.
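    For illustration (this is not claimed to be one of the 102 ALICe test cases), the kind of affine scalar loop invariant such tools compute, for the loop x = 0; y = 0; while (x < n) { x = x + 1; y = y + 2; } under the assumption n >= 0:

```latex
% Illustrative affine loop invariant (invented example, not an ALICe case),
% for: x = 0; y = 0; while (x < n) { x = x + 1; y = y + 2; }  with n >= 0.
\[
  \text{Invariant: } 0 \le x \le n \;\wedge\; y = 2x,
  \qquad
  \text{at exit } (x \ge n):\; x = n \;\wedge\; y = 2n .
\]
```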

    Preservation of Lyapunov-Theoretic Proofs: From Real to Floating-Point Arithmetic

    In a paper, Feron presents how Lyapunov-theoretic proofs of stability can be migrated toward computer-readable and verifiable certificates of control software behavior by relying on Floyd's and Hoare's proof systems. However, Lyapunov-theoretic proofs are stated in terms of exact real arithmetic and do not accurately represent the behavior of realistic programs run with machine arithmetic. We address the issue of preserving those proofs in the presence of rounding errors resulting from the use of floating-point arithmetic: we present an automatic tool, based on a theoretical framework whose soundness is proved in Coq, that translates Feron's proof invariants on real arithmetic into similar invariants on floating-point numbers, and preserves the proof structure. We show how our methodology makes it possible to verify whether stability invariants still hold for the concrete implementation of the controller. We study in detail the application of our tool to the open-loop system of Feron's paper and show that stability is preserved out of the box. We also translate Feron's proof for the closed-loop system, and discuss the conditions under which the system remains stable.
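    A schematic illustration of the kind of invariant involved (the matrices, the error bound and the weakening below are generic placeholders, not Feron's actual controller): an ellipsoidal Lyapunov invariant for a linear update, and the weakened form one must settle for once the update is computed in floating point.

```latex
% Schematic only: ellipsoidal Lyapunov invariant for x_{k+1} = A x_k and its
% weakening under rounding (A, P, epsilon, delta are generic placeholders).
\[
  \mathcal{E}_P = \{\, x \mid x^{\mathsf T} P x \le 1 \,\}, \quad P \succ 0,
  \qquad
  A^{\mathsf T} P A \preceq P
  \;\Longrightarrow\;
  \bigl( x_k \in \mathcal{E}_P \Rightarrow A x_k \in \mathcal{E}_P \bigr).
\]
\[
  \text{With rounding, } \hat{x}_{k+1} = A x_k + e_k,\ \|e_k\| \le \varepsilon,
  \text{ the invariant only survives in a weakened form such as }
  \hat{x}_{k+1}^{\mathsf T} P\, \hat{x}_{k+1} \le 1 + \delta(\varepsilon).
\]
```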

    Inondations dominées de Graphes Valués

    TIMC project (Traitement d'Images Multi-Cible, multi-target image processing).
    Challenges: if a grey level is regarded as an altitude, any grey-level image can be seen as a topographic relief. Mathematical morphology has developed levelings, powerful operators for noise filtering or for simplifying images before segmentation. Levelings act locally either as floodings, filling basins with lakes, or as razings, flattening peaks. These operations are very expensive in computation time, and the images to be processed are ever larger. The challenge is to find fast, parallelizable algorithms that can run on a variety of architectures.
    Skills developed: consider a topographic relief flooded at a uniform level. As this level rises, new lakes appear in minima and other lakes merge. The sequence of lakes created in this way has a tree structure. This observation is the basis of our work: building this tree structure from a 2D or 3D image; modelling floodings mathematically on such a structure (lattice structure of floodings, maximal flooding under a ceiling); developing a flooding algorithm that factors into multiple floodings on much smaller subtrees; and returning to the image to visualize the result.
    Results: design and validation of a new algorithm; an interactive simulator; an efficient implementation (20 million nodes in a few seconds); parallelization of the final-height computation phase; an efficient parallel implementation; publication and presentation at an international conference.
    Impact and perspectives: flooding algorithms are the basic ingredient of many morphological filters, such as levelings, which are indispensable for tackling complex problems. A given task may require many levelings, and they are often used in cascade for multi-scale texture analysis. Only algorithms that are fast enough and able to handle large data volumes will find their place in industrial or medical applications; the same holds for interactive applications, where the response time must be immediate.
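    A minimal illustrative sketch of the tree-of-lakes idea described above, on a 1-D signal (the function lake_tree and the example values are invented here; the work above targets 2D/3D images and a parallel implementation):

```python
# Illustrative sketch only (1-D signal; not the parallel 2-D/3-D algorithm
# described above): as a uniform water level rises, lakes appear at minima
# and merge. Union-find records the merge events, which form a tree.

def lake_tree(altitude):
    n = len(altitude)
    parent = list(range(n))            # union-find over pixel indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    flooded = [False] * n
    merges = []                        # (level, lake_a, lake_b) events
    for i in sorted(range(n), key=lambda k: altitude[k]):
        flooded[i] = True
        for j in (i - 1, i + 1):       # 1-D neighbours
            if 0 <= j < n and flooded[j]:
                a, b = find(i), find(j)
                if a != b:             # two distinct lakes meet at this level
                    merges.append((altitude[i], a, b))
                    parent[b] = a
    return merges

# Two basins separated by a peak of altitude 5: they only merge at level 5.
print(lake_tree([3, 1, 2, 5, 2, 0, 4]))
```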

    Programmation Haute Performance pour Architectures hybrides

    Propose a model for distributing computations and data, and semi-automatically generate parallel code that is efficient in both memory consumption and execution time.

    Mixing Systems Engineering and Enterprise Modelling principles to formalize a SE processes deployment approach in industry

    Systems Engineering (SE) is a tried and tested methodological approach to designing and testing new products. It acts as a model-based engineering approach and, for this purpose, promotes a set of standardized collaborative processes, modelling languages and frameworks. In a complementary way, Enterprise Modelling (EM) provides concepts, techniques and means to model businesses along with their processes. The purpose of this paper is to provide a method for the deployment of SE processes that takes interoperability into account and builds bridges between SE and EM. An application case illustrates the definition of the stakeholder requirements definition process defined in ISO 15288:2008.

    Towards a method to deploy systems engineering processes within companies

    The Systems Engineering (SE) approach is a tried and tested approach that promotes and coordinates all the appropriate processes to design, develop and test a system. These SE processes have been defined in many standards, which are not always consistent with each other and often provide only generic indications. Therefore, companies seeking to apply the SE approach must answer the following questions themselves: how should these generic processes be tailored to the company? What methodology must be applied to deploy SE processes? How can the success of this deployment be ensured? The purpose of this paper is to present the two main principles of a methodological approach for SE process deployment, currently under development and applied to a helicopter manufacturer. These principles are: 1) the description of the set of activities necessary for the deployment; 2) the main concepts necessary to the approach, gathered and briefly formalised in a global meta-model.

    Interoperability Assessment in the Deployment of Technical Processes in Industry

    Increasing competition on markets creates a vital need for companies to improve their efficiency and reactivity. One solution is to deploy, improve and manage their processes while paying special attention to the abilities of the resources involved. In particular, the interoperability of the latter is considered in this article as a challenge conditioning the success of the deployment. Consequently, this paper presents a means to assess the interoperability of the resources involved in a process throughout its life cycle.